A quick update on yesterday’s “Iraq the Vote” post. Kathy Frankovic, polling director for CBS News, emails with two questions CBS asked on a survey last year at the suggestion of Peter Feaver, then a political science professor at Duke. Here are the results and the full text of the questions (n=1,042 adults, conducted 4/23-27/2004):
56. Looking back, do you think the United States did the right thing in taking military action against Iraq, or should the U.S. have stayed out?
47% – Right thing
46% – Stay out

57. Regardless of whether you think taking military action in Iraq was the right thing to do, would you say that the U.S. is very likely to succeed in establishing a democratic government in Iraq, somewhat likely to succeed, not very likely to succeed, or not at all likely to succeed in establishing a democratic government there?
10% – Very likely
40% – Somewhat likely
31% – Not very likely
15% – Not at all likely
Frankovic points out that this survey occurred during a low ebb in support for the Iraq war. At the time, only 38% thought things were going well in Iraq, while Bush’s overall job rating was net negative: 46% approved and 47% disapproved of his performance as President.
Mark
I once tried to pull together any polls about military conflicts over the last ten years that contained estimates of “success”.
There weren’t many.
CBS also asked this question ten times between March and April 2003 (this from Roper Center):
QUESTION:
Regarding the war with Iraq, which of the following do you think is most likely: a fairly quick and successful effort, or a long and costly involvement?
RESULTS:
Quick and successful – 43%
Long and costly – 47%
Don’t know/No answer – 10%
ORGANIZATION CONDUCTING SURVEY: CBS NEWS
POPULATION: National adult
NUMBER OF PARTICIPANTS: 950
INTERVIEW METHOD: Telephone
BEGINNING DATE: April 2, 2003
I’d be curious to see partisan breakdowns on that question, or breakdowns by Bush support (the PIPA study on Iraq misperceptions suggests that support for Bush, rather than partisanship, is the direct predictor of perceptions).
Exactly–my data show that voting for Bush matters more than PID or ideology for explaining hypothetical tolerance for casualties in Iraq. It’s seen as Bush’s war.
Adam,
MP gets “results” again – CBS emailed with a link to their original PDF release (which I have also added above):
http://www.cbsnews.com/htdocs/CBSNews_polls/042804_poll.pdf
The release crosstabs every question by party. The tables do show a strong relationship between these questions and party ID, although a fair number of Democrats believed the war was the right thing to do (24%) and that the US would be successful in bringing democracy to Iraq (35%).
Mike (responding to the comments from the earlier post): Thanks! I’m quite interested in reading your paper on this topic–if you can, send me a copy or post a link here.
Paul,
Try this:
http://www2.chass.ncsu.edu/cobb/me/index.html
Then click on my working papers link; the paper should be the first one listed under working papers. Sorry I can’t figure out how to make a direct link the way Adam does.
This paper is not quite indicative of what I’ve been saying: it hasn’t really been updated since February, we are working on two papers, and we are trying to get TESS to fund an experiment that takes off from this study.
Another caveat: this is experimental research, not a general population survey, so I should have been clearer. We experimentalists often forget that students are just students.
Mike: That worked. Thanks.
Hi All,
Thanks for the thoughtful comments. I am thrilled to have folks like you thinking about our work.
In response to some of the comments above: yes, like CBS, we find a strong relationship between thinking Bush “did the right thing,” thinking the US will succeed, and approving of Bush overall. Also like CBS, we find the relationship is closer for “right thing” than for success. For example, 35% of those who disapprove of Bush still think we will succeed in Iraq, while only 20% of those who disapprove of Bush think that he “did the right thing” in attacking Iraq.
Our poll numbers on success (fielded at the same time as the CBS poll) show overall confidence in success about 15 points higher. The question wording is nearly identical, so I don’t have an immediate answer for the gap, except that it may be a question-ordering effect: in each of our surveys the questions on “right thing,” casualty tolerance, and success came first, while in the CBS survey the success question comes much later, after much discussion of Iraq. But this is just speculation. Like CBS, however, we do find that this was a low point in public confidence.
Most importantly, however, our central conclusion – that perceived likelihood of success is the most important factor driving casualty tolerance in Iraq – holds regardless of whether we control for party ID or for approval of Bush (although both of those variables matter too). Moreover, the relationship holds even if we run separate analyses among those who approve of Bush and those who disapprove.
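In case the shape of that check is unclear, here is a rough sketch of it (not our actual code; the data file and all variable names are hypothetical):

```python
# Rough sketch of the robustness check described above. "survey.csv",
# "casualty_tolerance", "perceived_success", "party_id", and
# "bush_approval" are hypothetical names, not our real data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # one row per respondent

# Swap in party ID vs. Bush approval as the control; the coefficient
# on perceived success should stay large either way.
for control in ["party_id", "bush_approval"]:
    fit = smf.ols(f"casualty_tolerance ~ perceived_success + {control}",
                  data=df).fit()
    print(control, fit.params["perceived_success"])

# Split-sample version: the relationship should hold separately among
# Bush approvers and disapprovers.
for label, sub in df.groupby("bush_approval"):
    fit = smf.ols("casualty_tolerance ~ perceived_success", data=sub).fit()
    print(label, fit.params["perceived_success"])
```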
Best,
Chris
Chris,
I like your papers very much. I have some questions too, if that’s ok.
First, a comment: I am ready to believe that a mixture of retrospective and prospective evaluations explains war opinions very well, and that they, especially in combination, mediate casualty sensitivity. I would like to see them all used as independent variables at the same time, though, to explain war support.
Question: Why have you and your colleagues never measured respondents’ actual estimates of casualties? I’ve done this with students and Adam did this with adults, and we both find that estimates are more random than not and almost meaningless as explanatory variables. It would seem that if people don’t know how many casualties are taking place, it is difficult to believe the causal mechanism implied by aggregate models of the effects of casualties, since those models assume citizens hold this knowledge.
Second, if I am reading everything correctly, why is tolerance for casualties not used as an independent variable when you have the individual-level data? If I am reading it right, the logic of the paper is to model the aggregate data to show that casualties mediate war opinions, depending on when they occur, but not as much as prior thinking would lead you to believe. Once the paper establishes that casualties somewhat “matter,” the question moves to: what are the determinants of casualty sensitivity? Why not use casualty sensitivity as an attitude to directly explain, say, Bush approval?
Next, have you ever measured casualty tolerance differently? Why measure tolerance as a categorical variable, for example, as opposed to an open-ended question you can later code categorically if you wish? Boettcher and I ask respondents to freely say any number, as did Adam, and we then use that information to create many kinds of variables representing casualty tolerance. For example, we can subtract estimates of actual deaths from tolerance, code tolerance using your 6-pt scheme, take the natural log of it, etc. Different coding schemes produce somewhat different conclusions, and I’m not sure what this all means.
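To make that concrete, here is a toy sketch of the kinds of recodings I mean (the values, column names, and 6-pt cut points are all invented for illustration):

```python
# Toy sketch of alternative codings of an open-ended casualty-tolerance
# response. All values, column names, and cut points are invented.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "tolerance": [0, 1, 50, 500, 5000, 50000],           # open-ended answers
    "estimated_deaths": [200, 800, 600, 900, 700, 800],  # perceived deaths so far
})

# (1) Tolerance net of the respondent's estimate of actual deaths.
df["net_tolerance"] = df["tolerance"] - df["estimated_deaths"]

# (2) Collapse into a 6-point categorical scheme (cut points are a guess).
df["tolerance_6pt"] = pd.cut(df["tolerance"],
                             bins=[-1, 0, 100, 500, 1000, 5000, np.inf],
                             labels=False)

# (3) Natural log, offset by 1 so a zero-tolerance answer is defined.
df["log_tolerance"] = np.log(df["tolerance"] + 1)

print(df)
```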
I guess another question would be: if the argument is that Americans are not as casualty-sensitive as prior work suggested, why spend so much time trying to explain casualty sensitivity at all? I think it matters a lot, but unless you show its effects as an IV, I can’t tell. So I am surprised that you don’t use it as an IV. Or did you, and I am just dense?
Some other thoughts: one stated implication is that people are behaving rationally. But if they don’t know the information about casualties, all they can really do is behave LIKE they are acting rationally. They must be taking cues from elites or from the tone of media coverage. My guess is that casualties allow elites to be more critical, and thus casualties are mediated events in the aggregate, with direct (and maybe temporary) effects when people are exposed to a single story about battle deaths.
Also, in case you didn’t check it in your data: voting for Bush proved more important than PID or ideology for explaining casualty tolerance.
thanks!
MC
According to the PDF of the CBS poll numbers, the poll was “weighted” by CBS so that the respondents were 299 Republicans (28.7%), 363 Democrats (34.8%), and 380 Independents (36.5%). This weighting was accomplished by deleting some Republicans from the raw totals and adding numbers to both the Democrats and Independents. As “weighted” by CBS, the poll showed 6% more Democrats than Republicans, which could explain some of the poll numbers.
Civil War Guy,
Just making sure: weighting is not literally adding and subtracting respondents. It has that effect by making the answers of individuals within certain groups count for more or less than 1. The reason is that the polling outfit believes its sample is not quite representative of reality, and to make it so, it weights the answers. PID is a controversial variable to weight because its properties are not perfectly known. Regardless, the numbers in that poll seem accurate when compared to others and over time.
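A toy illustration of the point, if it helps (all numbers invented):

```python
# Toy illustration: weighting rescales each respondent's vote rather than
# adding or deleting respondents. All numbers are invented.
import pandas as pd

df = pd.DataFrame({
    "party":  ["Rep"] * 330 + ["Dem"] * 330 + ["Ind"] * 340,
    "weight": [0.90] * 330 + [1.10] * 330 + [1.00] * 340,
})

unweighted = df["party"].value_counts(normalize=True)
weighted = df.groupby("party")["weight"].sum() / df["weight"].sum()

print(unweighted.round(3))  # every respondent counts as exactly 1
print(weighted.round(3))    # same 1,000 people, votes rescaled by weight
```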
Mike, this 2004 CBS poll was “weighted” so that there’d be 6.1% fewer Republicans than Democrats. Take a look at the party identification results for all 2004 surveys, as given in a past “Mystery Pollster” table:
CBS – 30% GOP, 34% Dem
Gallup – 34% GOP, 34% Dem
Pew – 30% GOP, 33% Dem
Harris – 31% GOP, 34% Dem
Annenberg – 32% GOP, 35% Dem
Note that the HIGHEST spread is 4% (CBS’s own, which, as has been noted elsewhere, is consistently lowest in conservative identification), and 3% is the more usual figure. In fact, CBS’s own polls gave the 2004 spread as 4%, and 4% was the spread in the raw numbers of this particular CBS poll prior to the “weighting”. So the “weighting” done by CBS here does not, in fact, “seem accurate when compared to others and over time”.
Civil War Guy:
You are misreading the CBS release. Their survey was “weighted” but not by party identification.
Like most public pollsters, they typically weight samples of all adults by gender, age, race & education to match US Census estimates for the population.
The net result typically makes the results a bit more Democratic, but the weighting is not accomplished by manipulating the percentages of Democrats and Republicans.
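For the curious, here is a simplified sketch of that kind of demographic weighting, reduced to a single invented education target (real pollsters weight on several variables at once):

```python
# Simplified sketch of weighting a sample to Census targets on one
# demographic (education). Targets and sample composition are invented.
import pandas as pd

sample = pd.DataFrame({"educ": ["college"] * 450 + ["no_college"] * 550})

census_target = {"college": 0.27, "no_college": 0.73}  # invented shares

sample_share = sample["educ"].value_counts(normalize=True)
sample["weight"] = sample["educ"].map(
    lambda g: census_target[g] / sample_share[g])

# Party ID is never touched directly, but any attitude correlated with
# education will shift once these weights are applied.
print(sample.groupby("educ")["weight"].first().round(3))
```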
I discussed this issue at length last fall starting here:
http://www.mysterypollster.com/main/2004/09/why_how_pollste.html
Mark, I didn’t say that CBS weighted the survey by party identification, so I didn’t “misread” the CBS release. I pointed out that the result of their weighting was to overstate Democratic identification and understate Republican identification. We agree on this point, evidently.
Chris,
My question on your central conclusion is this: do you really have much confidence in the casualty tolerance item?
I was always very concerned by the marginals on those measures, and I worry that you and Peter are placing far too much interpretive weight on a question that is potentially fatally riddled with measurement error.
(Some honesty in advertising may be appropriate here; Chris, Adam, and I all went to U of M grad school; Paul was a grad student at UNC while I taught for a while at Duke along with Peter and Chris; and I worked on an early part of the project that Chris and Peter are writing about.)
Paul,
I agree that the tolerance measure has a lot of measurement error, although we might be thinking different things about its source and direction. Yet Boettcher and I have measured it differently, and it works roughly the same. We allow people to say any number of deaths. We take the log, create scales, divide at the median, etc., and it doesn’t usually affect the substantive conclusion. Some coding schemes produce better R-squared values for overall model fit, but that is usually the extent of it. So, for whatever reason, the measure is robust. Ironically, if I take the continuous variable we measure and slap on their 6-pt coding scheme, it “works” better than any other way I can code it, except for its natural log.
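The skeleton of that fit comparison, for anyone who wants to try it (the file, variable names, and cut points are hypothetical):

```python
# Skeleton of the coding-scheme comparison: fit the same model under
# several codings of tolerance and compare R-squared. Names hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical respondent-level file

codings = {
    "raw": df["tolerance"],
    "log": np.log(df["tolerance"] + 1),
    "six_pt": pd.cut(df["tolerance"],
                     bins=[-1, 0, 100, 500, 1000, 5000, np.inf],
                     labels=False),
}

for name, coded in codings.items():
    fit = smf.ols("y ~ perceived_success + bush_vote",
                  data=df.assign(y=coded)).fit()
    print(name, round(fit.rsquared, 3))
```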
I’d be happy to get into a discussion of why the measure works, how it should ideally be measured, etc., if anyone is interested….
Mike,
My memory aligns with your description–it really isn’t a continuous measure at all. If I recall, basically the response set is:
0 deaths
1 death (essentially: anything more than zero)
2 deaths (same: anything greater than one)
10
100
and then odd figures. I remember things like “1,000,000,” which is an absurd number. Those are the responses that make me worry that it has lots of measurement error.
I’m not sure we’d agree on what “robust” means. I’d say it was robust if it worked with different populations and in different settings, *and* you showed that it measured what you thought it measured. Simply showing that it stands up to some changes in scaling is not sufficient.
Do we *really* know that it’s measuring casualty sensitivity, or just perceptions of what *might* be casualties in conflicts that *might* occur? And do we know anything at all about how responses to real body bags correlate with this perceptual measure?